
Add a visualization utility to render tokens and annotations in a notebook #508

Merged
n1t0 merged 14 commits into huggingface:master from talolard:feat/visualizer on Dec 4, 2020
Conversation

@talolard (Contributor) commented Nov 5, 2020

This follows on the discussion we had here.

What It Does

Users can get a visualization of the tokenized output, together with annotations:

from tokenizers import EncodingVisualizer
from tokenizers.viz.viztypes import Annotation
from tokenizers import BertWordPieceTokenizer

tokenizer = BertWordPieceTokenizer("/tmp/bert-base-uncased-vocab.txt", lowercase=True)
visualizer = EncodingVisualizer(tokenizer=tokenizer, default_to_notebook=True)
annotations = [...]  # a list of Annotation objects
visualizer(text, annotations=annotations)  # text is the string being tokenized

[screenshot: rendered tokens with annotation highlights]

Cool Features

  • ✔️ Automatically aligns annotations with tokens
  • ✔️ Supports annotations in any format (through a converter parameter; see the sketch after this list)
  • ✔️ Renders UNK tokens
  • ✔️ Preserves whitespace
  • ✔️ Easy to use
  • ✔️ Hover over an annotation to clearly see its borders
  • ✔️ Clear distinction of adjacent tokens with alternating shades of grey
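For example, if annotations come out of another tool as plain dicts, the converter can map them to Annotation objects. A minimal sketch continuing the snippet above (the dict keys, the Annotation field names, and the annotation_converter parameter name are assumptions for illustration, not taken from this PR's text):

my_ents = [
    {"startIdx": 0, "endIdx": 5, "tag": "PER"},    # illustrative custom annotation format
    {"startIdx": 16, "endIdx": 22, "tag": "ORG"},
]

def to_annotation(d):
    # map the custom dict format onto the Annotation type used by the visualizer
    return Annotation(start=d["startIdx"], end=d["endIdx"], label=d["tag"])

visualizer = EncodingVisualizer(
    tokenizer=tokenizer,
    default_to_notebook=True,
    annotation_converter=to_annotation,  # assumed name of the converter parameter
)
visualizer(text, annotations=my_ents)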

Missing Stuff

  • 💩 Tests - I need some guidance on how to write tests for this lib
  • 💩 Code Review - Is this easy to understand and maintain?
  • 💩 Right-to-Left Support - Some edge cases still remain with RTL
  • 💩 Special Tokens - Doesn't currently render CLS, SEP, etc. Need some help with that.
  • 🐛 Bug - When an annotation crosses part of a long UNK token, it renders multiple UNK tags on top of each other [screenshot].

Notebook

A demo notebook is included in the examples folder.

@talolard (Contributor, Author) commented Nov 5, 2020

@n1t0 can you give me some guidance on what's wrong with the docs build?

@n1t0 (Member) commented Nov 6, 2020

I should be able to have a deeper look today! Will let you know

@n1t0 (Member) left a comment

Thank you @talolard, this is really nice and clean, I love it!

I'm not entirely sure about the namespacing (tokenizers.viz vs tokenizers.notebooks or tokenizers.tools) and will need to think about it, but that's a detail.

For the error in the CI about the documentation, I think it is because we need to modify the setup.py to have it include what's necessary when we run python setup.py install|develop. I was having the same problem locally when trying to do import tokenizers in a Python shell.

Another little detail: we actually use Google-style for docstrings. If you're not familiar with this syntax, don't worry I'll take care of it. We can also include everything in the API Reference in the Sphinx docs.

Last thing that we'll need to check is if everything works as expected when using a tokenizer like the one from GPT-2 or Roberta. Since they use a byte-level technique, we can have multiple tokens that have overlapping spans over the input, for example with emojis or other Unicode characters that don't have their own token.

Review comments (now resolved) were left on: bindings/python/py_src/tokenizers/viz/visualizer.py, .idea/.gitignore, bindings/python/py_src/tokenizers/__init__.pyi
@talolard (Contributor, Author) commented Nov 9, 2020

Last thing that we'll need to check is if everything works as expected when using a tokenizer like the one from GPT-2 or Roberta. Since they use a byte-level technique, we can have multiple tokens that have overlapping spans over the input, for example with emojis or other Unicode characters that don't have their own token.

I don't know enough about BPE to think of a test case. My poo emoji is a surrogate pair, but I guess the test needs to be on a surrogate pair that isn't in the vocab? Any ideas?

I also tried 'Z͑ͫ̓ͪ̂ͫ̽͏̴̙̤̞͉͚̯̞̠͍A̴̵̜̰͔ͫ͗͢L̠ͨͧͩ͘G̴̻͈͍͔̹̑͗̎̅͛́Ǫ̵̹̻̝̳͂̌̌͘!͖̬̰̙̗̿̋ͥͥ̂ͣ̐́́͜͞' and got back
[screenshot: visualization of the Zalgo text]
which I can't tell if it's good or bad, given that len() of that string returns 76.

Can you think of an example to test?
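(Aside: the 76 comes from combining marks; each combining diacritic is its own Unicode code point, so Python's len() counts every one of them. A minimal illustration in plain Python:)

zalgo_a = "A" + "\u0301" * 5   # "A" followed by five combining acute accents
print(len(zalgo_a))            # 6, even though it renders as a single glyph
print(len("A\u0301"))          # 2, though it displays as one accented character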

@talolard talolard requested a review from n1t0 November 10, 2020 17:28
@talolard (Contributor, Author) commented:

I'm not entirely sure about the namespacing (tokenizers.viz vs tokenizers.notebooks or tokenizers.tools) and will need to think about it, but that's a detail.

My inclination would be towards tools or viz, but I have no strong conviction either way.

For the error in the CI about the documentation, I think it is because we need to modify the setup.py to have it include what's necessary when we run python setup.py install|develop. I was having the same problem locally when trying to do import tokenizers in a Python shell.

Could you handle that? I'm not sure what to do. When I run setup.py develop it works, presumably because of something I don't understand.

Another little detail: we actually use Google-style for docstrings. If you're not familiar with this syntax, don't worry I'll take care of it. We can also include everything in the API Reference in the Sphinx docs.

I made an attempt to use Google-style docstrings. There were some places where I wasn't sure how to write out the typings. If you comment on things to fix, I'll learn and fix them.

Last thing that we'll need to check is if everything works as expected when using a tokenizer like the one from GPT-2 or Roberta.

I added something to the notebook but, per my comment above, I'm not sure exactly what to test for.

@n1t0 (Member) commented Nov 12, 2020

The current version of the input text is great, I think, for checking whether it works as expected. I just ran some tests in a last cell with the following code:

# list each token alongside its offsets and the original text it covers
encoding = roberta_tokenizer.encode(text)
[(token, offset, text[offset[0]:offset[1]]) for (token, offset) in zip(encoding.tokens, encoding.offsets)]

which gives this kind of output:

[
...,
 ('Ġadd', (212, 216), ' add'),
 ('Ġa', (216, 218), ' a'),
 ('Ġunit', (218, 223), ' unit'),
 ('Ġtest', (223, 228), ' test'),
 ('Ġthat', (228, 233), ' that'),
 ('Ġcontains', (233, 242), ' contains'),
 ('Ġa', (242, 244), ' a'),
 ('Ġpile', (244, 249), ' pile'),
 ('Ġof', (249, 252), ' of'),
 ('Ġpo', (252, 255), ' po'),
 ('o', (255, 256), 'o'),
 ('Ġ(', (256, 258), ' ('),
 ('ðŁ', (258, 259), '💩'),
 ('Ĵ', (258, 259), '💩'),
 ('©', (258, 259), '💩'),
 (')', (259, 260), ')'),
 ('Ġin', (260, 263), ' in'),
 ('Ġa', (263, 265), ' a'),
 ('Ġstring', (265, 272), ' string'),
...,
 ('©', (280, 281), '💩'),
 ('ðŁ', (281, 282), '💩'),
 ('Ĵ', (281, 282), '💩'),
 ('©', (281, 282), '💩'),
 ('ðŁ', (282, 283), '💩'),
 ('Ĵ', (282, 283), '💩'),
 ('©', (282, 283), '💩'),
 ('ðŁ', (283, 284), '💩'),
 ('Ĵ', (283, 284), '💩'),
 ('©', (283, 284), '💩'),
 ('ðŁ', (284, 285), '💩'),
 ('Ĵ', (284, 285), '💩'),
 ('©', (284, 285), '💩'),
 ('ðŁ', (285, 286), '💩'),
 ('Ĵ', (285, 286), '💩'),
 ('©', (285, 286), '💩'),
 ('Ġand', (286, 290), ' and'),
 ('Ġsee', (290, 294), ' see'),
 ('Ġif', (294, 297), ' if'),
 ('Ġanything', (297, 306), ' anything'),
 ('Ġbreaks', (306, 313), ' breaks'),
...,
]

As you can see, there are actually a lot of tokens that we can't see because they are representing sub-parts of an actual Unicode code point.
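A quick way to see this at the byte level, with plain Python and no tokenizers API involved:

poo = "\N{PILE OF POO}"           # the 💩 emoji, a single code point (U+1F4A9)
print(list(poo.encode("utf-8")))  # [240, 159, 146, 169]: four bytes
print(len(poo))                   # 1: offsets are expressed in code points
# So a byte-level BPE can emit several tokens that all map back to the same
# one-character span, exactly like the ('ðŁ', 'Ĵ', '©') triplets above.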

I'd be curious to see what it looks like with a ByteLevelBPETokenizer trained on a language like Hebrew and see if the visualization actually makes sense in this case. Is this something you'd like to try?

Could you handle that, I'm not sure what to do. When I run setup.py develop it works, presumably because of something I don't understand.

Sure, don't worry, I'll take care of anything left related to the integration!

@talolard (Contributor, Author) commented:

I'd be curious to see what it looks like with a ByteLevelBPETokenizer trained on a language like Hebrew and see if the visualization actually makes sense in this case. Is this something you'd like to try?

I actually did that and took it out because it was too much text. What do you think of a "gallery" notebook with a mix of languages and tokenizers?

@n1t0 (Member) commented Nov 12, 2020

Sure! That'd be a great way to check that everything works as expected.

@talolard (Contributor, Author) commented:

Sure! That'd be a great way to check that everything works as expected.

I added a notebook with some examples in different languages.

@n1t0 I actually noticed something strange. When I use the BPE tokenizer, whitespace is included in the following token. I'm not sure if that's how it's supposed to be, or a bug in my code or in the tokenizers. Could you take a look at these pics and give guidance?

[screenshot: byte-level BPE visualization with the leading whitespace attached to the following token]

@n1t0 (Member) commented Nov 17, 2020

Thank you @talolard!

This is actually expected, yes. The byte-level BPE also encodes the whitespace, because that is what allows it to decode back to the original sentence. From your pictures and the various examples in the notebooks, I think everything looks as expected for English (and probably other Latin languages).
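(Side note, a sketch based on the GPT-2/Roberta byte-to-unicode mapping rather than on this PR's code: the Ġ prefix is literally the remapped space byte.)

# Bytes outside the printable range are remapped to code points starting at 256;
# the space byte 0x20 ends up at chr(288), which is 'Ġ'. That is why a leading
# space shows up as part of the following token.
print(chr(288))         # 'Ġ'
print("Ġ" == chr(288))  # True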

My current concern is about the other languages. I just tried checking the generated tokens with byte-level BPE using the example in Hebrew, and they don't seem to match the visualization at all.

>>> encoding = roberta_tokenizer.encode(texts["Hebrew"])
>>> [(token, offset, texts["Hebrew"][offset[0]:offset[1]]) for (token, offset) in zip(encoding.tokens, encoding.offsets)]
[('×ij', (0, 1), 'ב'),
 ('×', (1, 2), 'נ'),
 ('ł', (1, 2), 'נ'),
 ('×Ļ', (2, 3), 'י'),
 ('Ġ×', (3, 5), ' א'),
 ('IJ', (4, 5), 'א'),
 ('×', (5, 6), 'ד'),
 ('ĵ', (5, 6), 'ד'),
 ('×', (6, 7), 'ם'),
 ('Ŀ', (6, 7), 'ם'),
 ('Ġ×', (7, 9), ' ז'),
 ('ĸ', (8, 9), 'ז'),
 ('×ķ', (9, 10), 'ו'),
 ('ר', (10, 11), 'ר'),
 ('×Ļ×', (11, 13), 'ים'),
 ('Ŀ', (12, 13), 'ם'),
 ('Ġ×', (13, 15), ' ב'),
 ('ij', (14, 15), 'ב'),
 ('×', (15, 16), 'כ'),
 ('Ľ', (15, 16), 'כ'),
 ('׾', (16, 17), 'ל'),
 ('Ġ×', (17, 19), ' י'),
 ('Ļ', (18, 19), 'י'),
 ('×ķ', (19, 20), 'ו'),
 ('×', (20, 21), 'ם'),
 ('Ŀ', (20, 21), 'ם'),
 ('Ġ×', (21, 23), ' ל'),
 ('ľ', (22, 23), 'ל'),
 ('ר', (23, 24), 'ר'),
 ('×ķ', (24, 25), 'ו'),
 ('×', (25, 26), 'ח'),
 ('Ĺ', (25, 26), 'ח'),
 (',', (26, 27), ','),
...
]

As you can see, most tokens represent at most one character, and often there are two tokens for one character. Yet in the visualization they appear as long tokens, which seems wrong.

I think the Roberta byte-level BPE has been trained only on English and is not capable of tokenizing Hebrew correctly; that's why I wanted to see the result with a tokenizer trained specifically for Hebrew. I'd like to check whether, as I expect, the result would look accurate in that case. Does that make sense?
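A rough sketch of what that check could look like (the corpus path, vocab size, and variable names are placeholders; texts is the dict used in the notebook):

from tokenizers import ByteLevelBPETokenizer

# Train a byte-level BPE directly on Hebrew text (placeholder corpus and sizes)
hebrew_tokenizer = ByteLevelBPETokenizer()
hebrew_tokenizer.train(files=["hebrew_corpus.txt"], vocab_size=30_000, min_frequency=2)

# Inspect tokens/offsets the same way as above, then feed the tokenizer to the
# visualizer and compare against the Roberta output.
encoding = hebrew_tokenizer.encode(texts["Hebrew"])
print([(t, o, texts["Hebrew"][o[0]:o[1]]) for t, o in zip(encoding.tokens, encoding.offsets)][:15])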

@talolard (Contributor, Author) commented Nov 17, 2020

Nice catch.
I think you warned me about this before I started and I didn't understand. Also this is tricky...

What seems to be happening is that some characters are assigned to two tokens. I didn't anticipate that, so the code just takes the last token of the pair. The coloring of tokens is done by alternating between even and odd tokens, so we end up skipping an even one.
E.g. the two chars בנ [screenshot] are tokenized as three tokens [screenshot], and so we skip an even token in the CSS [screenshot].

I can color "multi token single chars" uniquely and set a hover state that shows all the tokens assigned to a char, which should clear that up visually (though it's confusing AF and would probably surprise users).

But there's another weird thing I need your input on:

[screenshot: two token spans that overlap at position 4]

These two overlap, e.g. the char at position 4 (א) is assigned to two different tokens. Is that expected?

@n1t0 (Member) commented Nov 17, 2020

I can color "multi token single chars" uniquely and set a hover state that shows all the tokens assigned to a char, which should clear that up visually (though it's confusing AF and would probably surprise users).

I honestly don't know how we should handle this. I expect the BPE algorithm to learn the most common tokens without having any overlap in most cases, and small overlaps with rarely seen tokens that end up being decomposed, but I'm not sure at all. Maybe just making sure it alternates between the two shades of grey, while excluding any "multi token single chars" from the next tokens could be enough.

But there's another weird thing I need your input on: [screenshot] These two overlap, e.g. the char at position 4 (א) is assigned to two different tokens. Is that expected?

Yes, in this case, the Ġ at the beginning of the first token represents a whitespace. So the first of these tokens is composed of the whitespace and a part of the character א (two characters in the span 3, 5), while the second one is just another part of the character א (with 4, 5). This is hard to visualize here because the spans are expressed in terms of Unicode code-point while each token has a unique span of non-overlapping bytes.
For example, we often see × as the first part of some characters, and I guess this byte actually represents the offset of the Unicode block reserved for Hebrew, while the second part is the byte that represents the actual character in this block. I expect these to get merged when a tokenizer is trained with Hebrew in the first place.
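A quick check in plain Python backs this up (no tokenizers code involved, just UTF-8):

# Every Hebrew letter in the U+05D0..U+05EA range shares the UTF-8 lead byte
# 0xD7, which the byte-to-unicode mapping leaves as the visible '×'; the second
# byte selects the letter within the block.
print("א".encode("utf-8"))  # b'\xd7\x90'
print("ב".encode("utf-8"))  # b'\xd7\x91'
print("ם".encode("utf-8"))  # b'\xd7\x9d'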

Maybe this post can help understand how the byte-level works: #203 (comment)

@talolard (Contributor, Author) commented:

OK, this requires actual thinking. I'll tinker with it over the weekend and come back with something.

@talolard (Contributor, Author) commented:

Sooo....
I set it up to track how many tokens "participate" in each character and to visualize "multi token chars" differently, plus a tooltip on hover.
[screenshot: multi-token characters highlighted with a hover tooltip]

I think it solves the case in Hebrew you pointed to, e.g. the second and third chars reflect that they are in different tokens.
[screenshot: Hebrew example showing per-character token membership]

My confidence with BPE is < 100% so I'm not sure this covers it all. What do you think?
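For the record, the gist of it is roughly this (a simplified sketch of the idea, not the exact code in this PR):

from collections import defaultdict

def tokens_per_char(encoding):
    """Count how many distinct tokens cover each character position."""
    covering = defaultdict(set)
    for token_ix, (start, end) in enumerate(encoding.offsets):
        for char_ix in range(start, end):
            covering[char_ix].add(token_ix)
    return {char_ix: len(toks) for char_ix, toks in covering.items()}

# Any position with a count > 1 is a "multi token char": it gets its own
# styling and a hover tooltip listing all the tokens that touch it.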

@cceyda commented Nov 23, 2020

How would this compare to spaCy's displacy? A while back I did something to visualize (token classification) outputs that way. Something similar can be done just for tokens + annotations (just gotta write the huggingface->spacy align|formatter).
BTW nice work on LightTag

@talolard (Contributor, Author) commented:

How would this compare to spaCy's displacy? A while back I did something to visualize (token classification) outputs that way. Something similar can be done just for tokens + annotations (just gotta write the huggingface->spacy align|formatter).
BTW nice work on LightTag

Thanks!
I think displacy and this solve a similar pain, and are optimized for different tokenizations/libraries. I think the upcoming spaCy 3 has a strong focus on transformers, so integrating would probably make sense. I'll try to PR that once this goes in.

@n1t0 (Member) commented Nov 27, 2020

Thank you @talolard that looks great!
I'll try to have everything ready to merge this today!

@n1t0 (Member) commented Nov 27, 2020

@talolard I don't think I have the authorization to push to the branch used for this PR. Maybe you disabled the option while opening the PR?

@talolard (Contributor, Author) commented:

@talolard I don't think I have the authorization to push to the branch used for this PR. Maybe you disabled the option while opening the PR?

Fixed

@n1t0 (Member) left a comment

This is now ready to be merged! Sorry it took me so long to finalize it; I was a bit overwhelmed with things left to do last week and was off this week.

Here is a summary of the little things I changed:

  • Everything now lives in a single file, under tools. So in order to import the visualizer and the annotations we can do:
from tokenizers.tools import EncodingVisualizer, Annotation
  • Updated the setup.py file to help it package the lib with the newly added files.
  • Updated the docstrings a bit, and included everything in the API Reference part of the docs.
  • I finally removed the language gallery. This notebook has been a great help in debugging what was happening with the various languages, but I fear that it might be misleading for the end-user. BERT and Roberta are both trained on English and so it does not represent the end result that a tokenizer trained on each specific language would produce.

Thanks again @talolard, this is a really great addition to the library and will be very helpful in understanding the tokenization. It will be included in the next release!

n1t0 merged commit 8916b6b into huggingface:master on Dec 4, 2020
@talolard (Contributor, Author) commented Dec 4, 2020

Yay!!

talolard deleted the feat/visualizer branch on December 4, 2020 at 19:22